-
Landslides pose a significant hazard worldwide. Despite advances in landslide monitoring, predicting their size, timing, and location remains a major challenge. We revisit the 2017 Mud Creek landslide in California using radar interferometry, pixel tracking, and elevation change measurements from satellite and airborne radar, lidar, and optical data. Our analysis shows that pixel tracking of optical imagery captured the transition from slow motion to runaway acceleration starting ~1 month before catastrophic failure—an acceleration undetected by satellite InSAR alone. Strain rate maps revealed a new slip surface formed within the landslide body during acceleration, likely a key weakening mechanism. Failure forecast analysis indicates the acceleration followed a hyperbolic trend, suggesting failure time could have been predicted at least 6 days in advance. We also inverted for the landslide thickness during the slow-moving phase and found variations from <1 to 36 m. While thickness inversions provide important first-order information on landslide size, more work is needed to better understand how landslide subsurface properties and deforming volumes may evolve during the transition from slow to fast motion. Our findings underscore the need for integrated remote sensing techniques to improve landslide monitoring and forecasting. Future advancements in operational monitoring systems and big data analysis will be critical for tracking slope instability and improving regional-scale failure predictions.
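The "hyperbolic trend" noted above is the signature of the classic inverse-velocity failure forecast (often attributed to Fukuzono): if surface velocity grows as v(t) ∝ 1/(t_f − t), then 1/v falls linearly with time and the zero-crossing of a linear fit estimates the failure time t_f. The sketch below illustrates that idea on synthetic data; it is not the paper's actual time series or fitting procedure, and all numbers are assumptions.

```python
# A minimal sketch of the inverse-velocity failure-forecast method,
# consistent with the "hyperbolic trend" described in the abstract.
# The data are synthetic; the paper's actual observations and fitting
# procedure are not reproduced here.
import numpy as np

t_f_true = 100.0                    # hypothetical failure time (days)
t = np.linspace(60.0, 94.0, 35)    # observation epochs (days)
v = 1.0 / (0.05 * (t_f_true - t))  # hyperbolic velocity, v ~ 1/(t_f - t)
v *= 1.0 + 0.02 * np.random.default_rng(0).standard_normal(t.size)  # noise

# For a hyperbolic trend, inverse velocity is linear in time:
#   1/v = A * (t_f - t), so the fit's zero-crossing estimates t_f.
slope, intercept = np.polyfit(t, 1.0 / v, 1)
t_f_est = -intercept / slope
print(f"estimated failure time: day {t_f_est:.1f} (true: day {t_f_true})")
```

Refitting the line as each new displacement epoch arrives would show the forecast converging on t_f, which is how an advance warning of several days could emerge in practice.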
-
Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals' willingness for advocacy is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.
-
Concerns about the risks posed by artificial intelligence (AI) have resulted in growing interest in algorithmic transparency. While algorithmic transparency is well-studied, there is evidence that many organizations do not value implementing transparency. In this case study, we test a ground-up approach to ensuring better real-world algorithmic transparency by creating transparency influencers — motivated individuals within organizations who advocate for transparency. We held an interactive online workshop on algorithmic transparency and advocacy for 15 professionals from news, media, and journalism. We reflect on workshop design choices and present insights from participant interviews. We found positive evidence for our approach: In the days following the workshop, three participants had done pro-transparency advocacy. Notably, one of them advocated for algorithmic transparency at an organization-wide AI strategy meeting. In the words of a participant: “if you are questioning whether or not you need to tell people [about AI], you need to tell people.”
-
Algorithmic recourse, or providing recommendations to individuals who receive an unfavorable outcome from an algorithmic system on how they can take action and change that outcome, is an important tool for giving individuals agency against algorithmic decision systems. Unfortunately, research on algorithmic recourse faces a fundamental challenge: there are no publicly available datasets on algorithmic recourse. In this work, we begin to explore a solution to this challenge by creating an agent-based simulation called The Game of Recourse (an homage to Conway's Game of Life) to synthesize realistic algorithmic recourse data. We designed The Game of Recourse with a focus on reliability and fairness, two areas of critical importance in socio-technical systems.
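The abstract does not describe The Game of Recourse's internal mechanics, so the following is only a hypothetical, minimal agent-based loop showing the general shape of such a simulation: agents with scores face a threshold classifier, rejected agents receive and sometimes act on a recourse recommendation, and every step is logged as synthetic recourse data. All names and parameters here are illustrative, not the authors'.

```python
# A hypothetical, minimal agent-based recourse simulation, inspired by the
# general idea in the abstract. It is NOT the authors' Game of Recourse;
# the dynamics and parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_steps, threshold, effort = 100, 20, 0.7, 0.05

scores = rng.uniform(0.0, 1.0, n_agents)  # each agent's feature score
group = rng.integers(0, 2, n_agents)      # a protected attribute (0/1)
log = []                                  # synthesized recourse records

for step in range(n_steps):
    rejected = scores < threshold
    # Recourse recommendation: raise your score by `effort` per step.
    # Agents comply with some probability; compliant agents improve.
    complies = rejected & (rng.uniform(size=n_agents) < 0.5)
    scores[complies] += effort
    for i in np.flatnonzero(rejected):
        log.append((step, i, int(group[i]), float(scores[i]), bool(complies[i])))

accepted = scores >= threshold
print(f"accepted after {n_steps} steps: {accepted.mean():.0%}")
for g in (0, 1):
    print(f"  group {g}: {accepted[group == g].mean():.0%}")
```

The `log` list plays the role of the synthesized dataset; a fairness-aware variant would additionally track how acceptance and compliance differ across the protected groups.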
-
In their article entitled “Trapdoor Fault Activation: A Step Toward Caldera Collapse at Sierra Negra, Galápagos, Ecuador,” Shreve and Delgado (2023, https://doi.org/10.1029/2023jb026437) examine co-eruptive deformation during the 2018 eruption of Sierra Negra Volcano. One of their major conclusions is that the 2018 eruption, and specifically co-eruptive faulting, represents the initial stages of caldera collapse. They reach this conclusion because they focus their analysis solely on co-eruptive deformation, and do not investigate the total (net) deformation for the 2005 to 2018 eruption cycle. Bell, La Femina, et al. (2021, https://doi.org/10.1038/s41467-021-21596-4) investigated both the pre- and co-eruptive phases of the 2018 eruption and showed that net deformation was one of caldera resurgence, not subsidence. In this comment, we demonstrate that the conclusion of collapse, or even initiation of collapse, is attributable to not accounting for pre-eruptive deformation on the intra-caldera Trapdoor Fault system and incorrectly assuming that the volcano-tectonic dynamics of Sierra Negra mimic those of other basaltic calderas.
-
Algorithmic systems are often called upon to assist in high-stakes decision making. In light of this, algorithmic recourse, the principle wherein individuals should be able to take action against an undesirable outcome made by an algorithmic system, is receiving growing attention. The bulk of the literature on algorithmic recourse to date focuses primarily on how to provide recourse to a single individual, overlooking a critical element: the effects of a continuously changing context. Disregarding these effects on recourse is a significant oversight, since, in almost all cases, recourse consists of an individual making a first, unfavorable attempt, and then being given an opportunity to make one or several attempts at a later date — when the context might have changed. This can create false expectations, as initial recourse recommendations may become less reliable over time due to model drift and competition for access to the favorable outcome between individuals. In this work we propose an agent-based simulation framework for studying the effects of a continuously changing environment on algorithmic recourse. In particular, we identify two main effects that can alter the reliability of recourse for individuals represented by the agents: (1) competition with other agents acting upon recourse, and (2) competition with new agents entering the environment. Our findings highlight that only a small set of specific parameterizations result in algorithmic recourse that is reliable for agents over time. Consequently, we argue that substantial additional work is needed to understand recourse reliability over time, and to develop recourse methods that reward agents’ effort.
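As a rough illustration of the two effects identified above, the hedged sketch below models a capacity-limited favorable outcome (only the top k scores win each round, so the acceptance cutoff floats with the applicant pool) together with a stream of new entrants. A recommendation quoted against the round-0 cutoff grows stale as the cutoff drifts upward. Parameters and dynamics are assumptions for illustration, not the paper's framework.

```python
# A minimal sketch of the two effects the abstract identifies: (1) agents
# acting on recourse compete for a capacity-limited favorable outcome, and
# (2) new agents keep entering. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
capacity, effort, n_rounds = 20, 0.05, 15

scores = rng.uniform(0.0, 1.0, 100)
promised_cutoff = None  # cutoff quoted to rejected agents in round 0

for r in range(n_rounds):
    # Top-`capacity` scores win: the cutoff floats with the pool.
    cutoff = np.sort(scores)[-capacity]
    if promised_cutoff is None:
        promised_cutoff = cutoff
    # Rejected agents follow their recourse recommendation and improve...
    scores[scores < cutoff] += effort
    # ...while new agents enter and add competition.
    scores = np.concatenate([scores, rng.uniform(0.0, 1.0, 10)])
    drift = cutoff - promised_cutoff
    print(f"round {r:2d}: cutoff {cutoff:.3f} (drift vs. round 0: {drift:+.3f})")
```

Running this shows the cutoff ratcheting upward round after round: an agent who reached the round-0 cutoff exactly as recommended can still be rejected later, which is the reliability failure the abstract describes.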
-
The “impossibility theorem” — which is considered foundational in algorithmic fairness literature — asserts that there must be trade-offs between common notions of fairness and performance when fitting statistical models, except in two special cases: when the prevalence of the outcome being predicted is equal across groups, or when a perfectly accurate predictor is used. However, theory does not always translate to practice. In this work, we challenge the implications of the impossibility theorem in practical settings. First, we show analytically that, by slightly relaxing the impossibility theorem (to accommodate a practitioner’s perspective of fairness), it becomes possible to identify abundant sets of models that satisfy seemingly incompatible fairness constraints. Second, we demonstrate the existence of these models through extensive experiments on five real-world datasets. We conclude by offering tools and guidance for practitioners to understand when — and to what degree — fairness along multiple criteria can be achieved. This work has an important implication for the community: achieving fairness along multiple metrics for multiple groups (and their intersections) is much more possible than was previously believed.
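One way to read the relaxation described above: instead of requiring fairness metrics to be exactly equal across groups (where the impossibility theorem applies), a practitioner accepts any model whose per-group gaps all fall within a tolerance ε. The sketch below checks two common metrics, false positive rate and positive predictive value, under such a relaxed criterion; the data and the specific metric pair are illustrative assumptions, not the paper's experimental setup.

```python
# A minimal sketch of an epsilon-relaxed fairness check in the spirit of
# the abstract: instead of demanding exact equality across groups, accept
# any model whose per-group gaps are all within eps. The synthetic data
# and metric choices are illustrative assumptions.
import numpy as np

def rates(y_true, y_pred):
    """False positive rate and positive predictive value for one group."""
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fpr = fp / max(np.sum(y_true == 0), 1)
    ppv = np.sum((y_pred == 1) & (y_true == 1)) / max(np.sum(y_pred == 1), 1)
    return fpr, ppv

def within_relaxed_fairness(y_true, y_pred, group, eps=0.05):
    fpr0, ppv0 = rates(y_true[group == 0], y_pred[group == 0])
    fpr1, ppv1 = rates(y_true[group == 1], y_pred[group == 1])
    return abs(fpr0 - fpr1) <= eps and abs(ppv0 - ppv1) <= eps

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = (rng.uniform(size=1000) < 0.5 + 0.3 * y_true).astype(int)
print(within_relaxed_fairness(y_true, y_pred, group, eps=0.05))
```

Sweeping ε over a grid of candidate models is one plausible way the “abundant sets of models” described in the abstract could be surfaced in practice.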